
PPC’s #fakenews: 9 ways bad numbers might look good

Today’s news landscape makes it tough to separate truth from fiction. Between news hackers and special-interest propaganda, it’s harder than ever to find a source of information you can trust.

Search marketing should be immune to it all. After all, we’re a metrics-first business. We count clicks, we tally conversions and we do difficult math to calculate return on investment (ROI). But did you know it’s super-easy to fake your way to seemingly good performance, due to rookie errors, bad habits or dirty tricks? Here’s how to arm yourself against the pay-per-click (PPC) version of “fake news.”

Rookie errors

It makes sense that less experienced people are more likely to make mistakes. A junior employee just doesn’t have the same level of knowledge; plus, they are hungry to make an impact on the business and their boss. These types of errors are understandable but potentially damaging to campaign health.

Comparing CTR

The higher the click-through rate (CTR), the better. It signals a well-written, relevant ad. But because many different factors influence this metric, comparing a single before-and-after CTR figure offers little value. To show that CTR has really improved, you must take everything else into account.

In the example below, it becomes clear that CTR is influenced more by the traffic mix (Google Search versus search partners, Top versus Other positions) than by the ad text itself. To measure the impact of ad text changes, compare only [exact] keywords in the same position and on the same delivery type.
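To make that concrete, here is a minimal sketch of what controlling for the traffic mix can look like, assuming you have exported keyword-level stats with network and position columns (all row data and column names here are hypothetical):

```python
from collections import defaultdict

# Hypothetical keyword-level export: (period, match_type, network, position, impressions, clicks)
rows = [
    ("before", "exact", "google_search",   "top",   10_000, 900),
    ("before", "exact", "search_partners", "other",  5_000, 100),
    ("after",  "exact", "google_search",   "top",    9_000, 855),
    ("after",  "exact", "search_partners", "other", 12_000, 260),
]

def ctr_by_segment(rows, period):
    """CTR per (network, position) segment, [exact] keywords only."""
    agg = defaultdict(lambda: [0, 0])  # segment -> [impressions, clicks]
    for p, match, network, position, imps, clicks in rows:
        if p == period and match == "exact":
            agg[(network, position)][0] += imps
            agg[(network, position)][1] += clicks
    return {seg: clicks / imps for seg, (imps, clicks) in agg.items()}

before, after = ctr_by_segment(rows, "before"), ctr_by_segment(rows, "after")
for seg in sorted(before):
    print(seg, f"{before[seg]:.2%} -> {after.get(seg, 0):.2%}")
```

With these made-up numbers, the blended CTR drops simply because more traffic shifted to search partners, while every individual segment actually improved. That gap is the difference between a headline CTR and a fair comparison.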

No neutral tracking tool

It’s your first presentation and the client wants revenue figures. You pull them from AdWords, right? Wrong! AdWords’ own conversion tracking takes credit for sales that other channels may have driven, so more experienced managers use a neutral tracking tool like Omniture or Google Analytics to avoid inflated numbers. The same issue applies to Criteo and Facebook tracking: rely on their own reporting and you will see disproportionately high revenue numbers.
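A simple way to keep yourself honest is to report the gap between platform-claimed revenue and what the neutral tool records for the same date range. The channels are real, but the figures below are purely illustrative:

```python
# Illustrative revenue for the same date range (numbers are made up).
platform_reported  = {"AdWords": 120_000, "Facebook": 80_000, "Criteo": 60_000}
analytics_reported = {"AdWords":  95_000, "Facebook": 45_000, "Criteo": 30_000}

for channel, platform_rev in platform_reported.items():
    inflation = platform_rev / analytics_reported[channel] - 1
    print(f"{channel}: platform claims {inflation:.0%} more revenue than the neutral tool")
```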

No peer group benchmarking

Congratulations! It’s golf season, and your sportswear client grew its sales by 20 percent. Unfortunately, you didn’t take Google benchmarking into account. It turns out that your client’s competitors grew their sales by 40 percent! Fail to normalize the numbers, and you risk painting an inaccurate picture.
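Normalizing is a one-line calculation, as the sketch below shows with the figures from this example:

```python
client_growth = 0.20     # your sportswear client grew sales 20 percent
benchmark_growth = 0.40  # the Google benchmark: competitors grew 40 percent

# Growth relative to the market the client actually competes in.
relative = (1 + client_growth) / (1 + benchmark_growth) - 1
print(f"Against the benchmark, the client fell behind by {abs(relative):.0%}")
```

With those numbers, the “growth story” is really a loss of roughly 14 percent of relative market position.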

Rookie errors are one thing, but what follows are lies of commission: habits and tactics where PPC pros should simply know better. Regardless of the intent behind them, advertisers should keep an eye out for these deliberately skewed representations.

  • Mixing brand and non-brand in reporting. I’ve previously mentioned the sometimes-complex relationship between digital marketing and other marketing channels. Because finance dictates the budget, PPC managers may try to impress it by engineering overly positive performance. Mixing brand and non-brand terms is a good example. Generic queries don’t perform as well as their branded counterparts and drive fewer conversions, yet they consume most of the budget. Reporting a blend of both offers an easy (but flawed) “way out.”
  • Blending different markets together. When results time comes, performance can vary widely between markets within the same region. Beware of lazy reporting that bundles everything into one figure. The blended number looks good, but at the cost of the true picture.
  • Adding a high share of existing customers. Mature businesses boast plenty of existing customers, but their share of new-customer acquisition stays low. Cynical marketers have a cunning plan to gloss over this: using remarketing lists for search ads (RLSA), they mix retargeting into PPC traffic.

With spend focused on existing customers, return on ad spend (ROAS) skyrockets while new customer acquisition dries up. ROAS isn’t the best metric to optimize for, because it won’t tell the whole story; splitting the report, as in the sketch below, makes that visible.
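This is a hedged illustration of what a split report exposes; the segment names, costs and revenues are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    cost: float
    revenue: float
    new_customers: int

# Hypothetical segments that a blended report would collapse into one line.
segments = [
    Segment("brand / existing (RLSA)", cost=2_000,  revenue=60_000, new_customers=50),
    Segment("brand / new",             cost=1_000,  revenue=15_000, new_customers=300),
    Segment("non-brand / new",         cost=12_000, revenue=24_000, new_customers=900),
]

blended_roas = sum(s.revenue for s in segments) / sum(s.cost for s in segments)
print(f"Blended ROAS: {blended_roas:.1f} (looks great)")

for s in segments:
    print(f"{s.name:<26} ROAS {s.revenue / s.cost:4.1f}   cost per new customer {s.cost / s.new_customers:6.1f}")
```

The blended ROAS looks spectacular, yet almost all of the new customers come from the non-brand segment with the weakest ROAS, which is exactly the part a blended report buries.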

Percentage sorcery

Let’s have a look below at a retailer’s campaign statistics for the “shoes” category. The PPC campaign grew slightly, by $3k, but isn’t really moving the needle.

Our rogue marketer must now report on performance for “shoes” in the PPC segment (for Jan., Feb. and March). Month over month, it compares poorly to the “total marketing” segment:

Instead, they highlight growth rates rather than absolute revenue, as shown below. Fantastic! At 13 percent against 7 percent, PPC’s growth rate now looks almost double the total marketing growth rate!

But now, a reality check: that flattering ratio rests on a measly uplift of just $3k in revenue, which represents less than 1 percent of overall revenue.
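Worked through with assumed totals (the underlying table isn’t reproduced here, so the base figures below are guesses chosen to match the 13 percent, 7 percent and sub-1-percent statements), the trick is simply to quote a ratio of growth rates instead of the absolute uplift:

```python
# Assumed figures; only the growth rates and the $3k uplift come from the example.
ppc_before, ppc_after     = 23_000, 26_000    # the measly $3k uplift (~13 percent growth)
total_before, total_after = 400_000, 428_000  # all channels combined (~7 percent growth)

ppc_growth   = ppc_after / ppc_before - 1
total_growth = total_after / total_before - 1

# The sorcery: compare one percentage against another percentage.
print(f"PPC growth rate is {ppc_growth / total_growth - 1:.0%} higher than total growth")

# The reality check: the same change in absolute terms.
uplift = ppc_after - ppc_before
print(f"Absolute PPC uplift: ${uplift:,}, or {uplift / total_after:.1%} of overall revenue")
```

Both statements are technically true; only the second one tells the finance team anything useful.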

Questionable time frames

Skewing the baseline is another way to mask poor performance. Shortcut marketers simply compare a key performance indicator (KPI) from the last five days to a historic equivalent. It could be five days earlier, it could be a 30-day average before that. Whatever makes it look good enough.

By extension, a marketer may deliberately avoid year-over-year comparisons.

In this example, the marketer highlights a healthy Q2 2018. Here are some things to note:

  • A dip in April precedes a successful May and June.
  • June grows by 65 percent against April.
  • June grows 40 percent against March.

Compare the revenue year over year and the perspective changes. In this context, Q2 doesn’t look that great after all.
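Here is a short sketch of why the baseline matters. The monthly figures are made up, but chosen so that the flattering comparisons reproduce the 65 percent and 40 percent claims above, while the year-over-year view turns negative:

```python
# Hypothetical monthly revenue in $k; 2017 is the year-over-year baseline.
revenue_2018 = {"Mar": 100, "Apr": 85, "May": 120, "Jun": 140}
revenue_2017 = {"Apr": 110, "May": 135, "Jun": 150}

# The flattering view: pick a convenient baseline inside the same year.
print(f"Jun vs Apr 2018: {revenue_2018['Jun'] / revenue_2018['Apr'] - 1:+.0%}")
print(f"Jun vs Mar 2018: {revenue_2018['Jun'] / revenue_2018['Mar'] - 1:+.0%}")

# The honest view: same months, previous year.
for month in ("Apr", "May", "Jun"):
    print(f"{month} year over year: {revenue_2018[month] / revenue_2017[month] - 1:+.0%}")
```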

Vague brand-term reporting

On pure brand terms, CPC and impression share are the only metrics PPC can really influence. Brand growth often has little to do with PPC performance; it is driven by things like seasonality or an ad on television. Those looking for extra glory may ignore this fact and offer only high-level reports. If you hear “brand revenue increased by x percent,” you should probably question it.
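One hedged way to question such a claim is to compare the reported revenue growth against the growth in brand search demand over the same period; if impressions grew just as fast while impression share and CPC barely moved, the credit belongs to the brand, not to PPC. All figures below are illustrative:

```python
# Illustrative brand-term figures for two comparable periods.
brand = {
    "before": {"impressions": 200_000, "impression_share": 0.95, "cpc": 0.25, "revenue": 400_000},
    "after":  {"impressions": 260_000, "impression_share": 0.94, "cpc": 0.27, "revenue": 500_000},
}

rev_growth    = brand["after"]["revenue"]     / brand["before"]["revenue"] - 1
demand_growth = brand["after"]["impressions"] / brand["before"]["impressions"] - 1

print(f"Brand revenue grew {rev_growth:.0%}, but brand search demand grew {demand_growth:.0%}")
print("Impression share and CPC barely moved, so PPC can't claim the credit.")
```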

There’s an old quote that seems appropriate here:

There are three kinds of lies: lies, damned lies and statistics.

The best way to arm yourself against all three is to question the underlying assumptions behind the data. Ask yourself, “Does it pass the sniff test?” Check your own biases and look at the data behind the dashboard reporting. Only then can you be sure you haven’t fallen victim to one of the examples above.
